Multiple optimal learning factors for feed-forward networks
Authors
Abstract
A batch training algorithm for feed-forward networks is proposed which uses Newton’s method to estimate a vector of optimal learning factors, one for each hidden unit. Backpropagation, using this learning factor vector, is used to modify the hidden units’ input weights. Linear equations are then solved for the network’s output weights. Elements of the new method’s Gauss-Newton Hessian matrix are shown to be weighted sums of elements from the total network’s Hessian. In several examples, the new method performs better than backpropagation and conjugate gradient, with similar numbers of required multiplies. The method performs as well as or better than Levenberg-Marquardt, with several orders of magnitude fewer multiplies due to the small size of its Hessian.
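The iteration described above (linear least squares for the output weights, backpropagation for the hidden units' input weights, and a Newton step on a per-hidden-unit learning-factor vector) can be sketched as follows. This is a minimal single-output NumPy illustration under stated assumptions, not the authors' implementation: the sigmoid activation, toy data, network sizes, and the small ridge term added to the Hessian are all choices made here for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy regression data: N patterns, n_in inputs plus a bias column, one output.
N, n_in, n_hid = 200, 4, 8
X = np.hstack([rng.standard_normal((N, n_in)), np.ones((N, 1))])
t = np.sin(X[:, :n_in]).sum(axis=1)

W = 0.1 * rng.standard_normal((n_hid, n_in + 1))  # hidden units' input weights
mse_hist = []

for _ in range(30):
    net = X @ W.T                                   # net functions, (N, n_hid)
    O = sigmoid(net)                                # hidden activations
    # Output weights: the output is linear in O, so solve a least-squares problem.
    A = np.hstack([O, np.ones((N, 1))])
    wo_full, *_ = np.linalg.lstsq(A, t, rcond=None)
    wo = wo_full[:n_hid]
    e = t - A @ wo_full                             # residuals
    mse_hist.append(float(np.mean(e ** 2)))
    # Backpropagation: a descent direction for each hidden unit's input weights.
    delta = (e[:, None] * wo[None, :]) * O * (1.0 - O)
    D = (delta.T @ X) / N                           # (n_hid, n_in + 1)
    dnet = X @ D.T                                  # change in each net function along D
    # Gauss-Newton system for the learning-factor vector z (one z per hidden unit):
    # u[p, k] = dy_p/dz_k at z = 0, so H = 2 u^T u and g = -2 u^T e.
    u = wo[None, :] * O * (1.0 - O) * dnet
    H = 2.0 * (u.T @ u)                             # small (n_hid x n_hid) Hessian
    g = -2.0 * (u.T @ e)
    z = np.linalg.solve(H + 1e-8 * np.eye(n_hid), -g)  # Newton step for z
    W += z[:, None] * D                             # backprop update, scaled per unit
```

Note that the Newton solve involves only an n_hid-by-n_hid matrix, which is consistent with the abstract's claim that the per-iteration cost stays close to backpropagation's while avoiding the full-network Hessian that Levenberg-Marquardt requires.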
Similar references
An Unsupervised Learning Method for an Attacker Agent in Robot Soccer Competitions Based on the Kohonen Neural Network
The RoboCup competition, as a great test-bed, has become a popular worldwide domain in recent years. The main objective of such competitions is to deal with the complex behavior of systems which consist of multiple autonomous agents. The rich experience of human soccer players can be used as a valuable reference for a robot soccer player. However, because of the differences between real and simulated soc...
Multiple-response process optimization by neural networks based on the desirability concept
In this paper, a method is proposed for Multiple Response Optimization (MRO) using neural networks, which uses the desirability of each response for forecasting. The neural network used is a feed-forward back-propagation network with two hidden layers. The numbers of neurons in the hidden layers are determined using the MSE criterion for training and test data. The numbers of neurons of the first layer last laye...
Effect of sound classification by neural networks in the recognition of human hearing
In this paper, we focus on two basic issues: (a) the classification of sound by neural networks based on frequency and sound intensity parameters, and (b) evaluating the health of different human ears as compared to those of a healthy person. Sound classification by a specific feed-forward neural network with two inputs, frequency and sound intensity, and two hidden layers is proposed. This process...
Optimal Convergence Rate in Feed Forward Neural Networks using HJB Equation
A control theoretic approach is presented in this paper for both batch and instantaneous updates of weights in feed-forward neural networks. The popular Hamilton-Jacobi-Bellman (HJB) equation has been used to generate an optimal weight update law. The remarkable contribution of this paper is that closed-form solutions for both the optimal cost and the weight update can be achieved for any feed-forward n...
Exploring Optimal Architecture of Multi-layered Feed-forward Neural Networks (MLFNN) as Bidirectional Associative Memory (BAM) for Function Approximation
Function approximation is an instance of supervised learning which is one of the most studied topics in machine learning, artificial neural networks, pattern recognition, and statistical curve fitting. In principle, any of the methods studied in these fields can be used in reinforcement learning. Multi-layered feed-forward neural networks (MLFNN) have been extensively used for the purpose of fu...